
A Detailed Study on LLM Biases Concerning Corporate Social Responsibility and Green Supply Chains

Ontrup, Greta, Bush, Annika, Pauly, Markus, Aksoy, Meltem

arXiv.org Artificial Intelligence

Organizations increasingly use Large Language Models (LLMs) to improve supply chain processes and reduce environmental impacts. However, LLMs have been shown to reproduce biases regarding the prioritization of sustainable business strategies. It is therefore important to identify the underlying training-data biases that LLMs carry regarding the importance and role of sustainable business and supply chain practices. This study investigates how different LLMs respond to validated surveys about the role of ethics and responsibility for businesses, and about the importance of sustainable practices and relations with suppliers and customers. Using standardized questionnaires, we systematically analyze responses generated by state-of-the-art LLMs to identify variations. We further examine whether these differences are amplified by prompts representing four organizational culture types, thereby assessing the practical relevance of the identified biases. The findings reveal significant systematic differences between models and demonstrate that organizational culture prompts substantially modify LLM responses. The study holds important implications for LLM-assisted decision-making in sustainability contexts.
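
To make the setup concrete, a minimal sketch of this kind of survey-prompting experiment is given below. The model names, culture prompts, and survey item are illustrative placeholders, not the study's actual materials, and the OpenAI client is just one possible backend.

```python
# Hypothetical sketch of a survey-prompting experiment of the kind described
# above; all prompts and model names here are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()

CULTURE_PROMPTS = {
    "clan": "You work in a collaborative, family-like organization.",
    "adhocracy": "You work in a dynamic, entrepreneurial organization.",
    "market": "You work in a results-driven, competitive organization.",
    "hierarchy": "You work in a structured, process-driven organization.",
}

SURVEY_ITEM = (
    "On a scale from 1 (strongly disagree) to 7 (strongly agree): "
    "'Businesses have a responsibility to prioritize sustainable supply "
    "chain practices even at the expense of short-term profit.' "
    "Answer with a single number."
)

def ask(model: str, culture: str | None) -> str:
    messages = []
    if culture is not None:
        messages.append({"role": "system", "content": CULTURE_PROMPTS[culture]})
    messages.append({"role": "user", "content": SURVEY_ITEM})
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# Collect repeated responses per model and culture condition for later
# statistical comparison (e.g., tests for systematic between-model differences).
results = {}
for model in ["gpt-4o", "gpt-4o-mini"]:
    for culture in [None, *CULTURE_PROMPTS]:
        results[(model, culture)] = [ask(model, culture) for _ in range(5)]
```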


The Role of Explainable AI in Revolutionizing Human Health Monitoring

Alharthi, Abdullah, Alqurashi, Ahmed, Alharbi, Turki, Alammar, Mohammed, Aldosari, Nasser, Bouchekara, Houssem, Shaaban, Yusuf, Shahriar, Mohammad Shoaib, Ayidh, Abdulrahman Al

arXiv.org Artificial Intelligence

The complex nature of disease mechanisms and the variability of patient symptoms present significant obstacles to developing effective diagnostic tools. Although machine learning has made considerable advances in medical diagnosis, its decision-making processes frequently lack transparency, which can jeopardize patient outcomes. This underscores the critical need for Explainable AI (XAI), which not only offers greater clarity but also has the potential to significantly improve patient care. In this literature review, we conduct a detailed analysis of XAI methods identified through searches across various databases, focusing on chronic conditions such as Parkinson's, stroke, depression, cancer, heart disease, and Alzheimer's disease. The literature search revealed the application of nine trending XAI algorithms in the field of healthcare and highlighted the pros and cons of each. The article concludes with a critical appraisal of the challenges and future research opportunities for XAI in human health monitoring.


Distilling System 2 into System 1

Yu, Ping, Xu, Jing, Weston, Jason, Kulikov, Ilia

arXiv.org Artificial Intelligence

Large language models (LLMs) can spend extra compute during inference to generate intermediate thoughts, which helps to produce better final responses. Since Chain-of-Thought (Wei et al., 2022), many such System 2 techniques have been proposed, such as Rephrase and Respond (Deng et al., 2023a), System 2 Attention (Weston and Sukhbaatar, 2023) and Branch-Solve-Merge (Saha et al., 2023). In this work we investigate self-supervised methods to "compile" (distill) higher quality outputs from System 2 techniques back into LLM generations without intermediate reasoning token sequences, as this reasoning has been distilled into System 1. We show that several such techniques can be successfully distilled, resulting in improved results compared to the original System 1 performance, and with less inference cost than System 2. We posit that such System 2 distillation will be an important feature of future continually learning AI systems, enabling them to focus System 2 capabilities on the reasoning tasks that they cannot yet do well.
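
A rough sketch of the self-supervised distillation recipe follows: sample System 2 outputs, keep only inputs where the sampled final answers agree, and pair each input directly with its final answer so the base model can be fine-tuned without reasoning tokens. The majority-vote consistency filter and function names here are illustrative choices, not the paper's exact pipeline.

```python
# Minimal sketch of System 2 -> System 1 distillation data construction.
# `generate` stands in for any LLM sampling call.
from collections import Counter

def system2_answer(generate, prompt: str, n_samples: int = 8):
    """Sample chain-of-thought completions and majority-vote the final answer."""
    finals = []
    for _ in range(n_samples):
        cot = generate(f"{prompt}\nLet's think step by step.")
        lines = cot.strip().splitlines()
        finals.append(lines[-1] if lines else "")  # crude final-answer extraction
    answer, votes = Counter(finals).most_common(1)[0]
    return answer, votes / n_samples

def build_distillation_set(generate, prompts, min_agreement: float = 0.75):
    """Keep prompts whose System 2 answers are self-consistent, pairing each
    prompt directly with its final answer (no intermediate reasoning tokens)."""
    dataset = []
    for prompt in prompts:
        answer, agreement = system2_answer(generate, prompt)
        if agreement >= min_agreement:
            dataset.append({"input": prompt, "target": answer})
    return dataset  # fine-tune the base (System 1) model on these pairs
```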


Model free variable importance for high dimensional data

Hama, Naofumi, Mase, Masayoshi, Owen, Art B.

arXiv.org Artificial Intelligence

A model-agnostic variable importance method can be used with arbitrary prediction functions. Here we present some model-free methods that do not require access to the prediction function. This is useful when that function is proprietary and not available, or just extremely expensive. It is also useful when studying residuals from a model. The cohort Shapley (CS) method is model-free but has exponential cost in the dimension of the input space. A supervised on-manifold Shapley method from Frye et al. (2020) is also model-free but requires as input a second black-box model that has to be trained for the Shapley value problem. We introduce an integrated gradient (IG) version of cohort Shapley, called IGCS, with cost $\mathcal{O}(nd)$. We show that over the vast majority of the relevant unit cube, the IGCS value function is close to a multilinear function for which IGCS matches CS. Another benefit of IGCS is that it allows IG methods to be used with binary predictors. We use some area between curves (ABC) measures to quantify the performance of IGCS. On a problem from high energy physics we verify that IGCS has nearly the same ABCs as CS does. We also use it on a problem from computational chemistry in 1024 variables. We see there that IGCS attains much higher ABCs than we get from Monte Carlo sampling. The code is publicly available at https://github.com/cohortshapley/cohortintgrad
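
Under the assumptions sketched in the abstract (binary per-feature similarity to the target and a cohort-mean value function), the IGCS construction can be approximated in a few lines of numpy: form the multilinear extension of the cohort mean and integrate its gradient along the diagonal path. This is a simplified reading with our own naming; the linked repository has the reference implementation.

```python
import numpy as np

def igcs(y, match, steps: int = 256):
    """Integrated-gradient cohort Shapley (IGCS) sketch.

    y     : (n,) observed responses (or predictions).
    match : (n, d) binary matrix; match[i, j] = 1 if subject i is judged
            similar to the target on feature j (the target matches itself).
    Returns (d,) attributions summing to v(full cohort) - v(empty cohort).
    """
    y = np.asarray(y, dtype=float)
    m = np.asarray(match, dtype=float)
    n, d = m.shape
    phi = np.zeros(d)
    # Midpoint rule along the diagonal path z = t * (1, ..., 1).
    for t in (np.arange(steps) + 0.5) / steps:
        term = 1.0 - t + t * m            # (n, d) factors of the weights w_i(z)
        w = term.prod(axis=1)             # (n,) multilinear extension weights
        A, B = w @ y, w.sum()             # cohort mean g(z) = A / B
        for j in range(d):
            dw = (m[:, j] - 1.0) * w / term[:, j]   # d w_i / d z_j
            dA, dB = dw @ y, dw.sum()
            phi[j] += (dA * B - A * dB) / B**2      # quotient rule on g(z)
    return phi / steps
```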


Variable importance without impossible data

Mase, Masayoshi, Owen, Art B., Seiler, Benjamin B.

arXiv.org Artificial Intelligence

The most popular methods for measuring the importance of variables in a black-box prediction algorithm make use of synthetic inputs that combine predictor variables from multiple subjects. These inputs can be unlikely, physically impossible, or even logically impossible. As a result, the predictions for such cases can be based on data very unlike any the black box was trained on. We think that users cannot trust an explanation of the decision of a prediction algorithm when the explanation uses such values. Instead, we advocate a method called Cohort Shapley that is grounded in economic game theory and, unlike most other game-theoretic methods, uses only actually observed data to quantify variable importance. Cohort Shapley works by narrowing the cohort of subjects judged to be similar to a target subject on one or more features. We illustrate it on an algorithmic fairness problem where it is essential to attribute importance to protected variables that the model was not trained on.
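
For intuition, exact cohort Shapley for a single target subject can be written directly from the Shapley formula, with the value of a coalition S defined as the mean response over subjects matching the target on every feature in S. This brute-force sketch is exponential in the number of features and uses our own names.

```python
import numpy as np
from itertools import combinations
from math import factorial

def cohort_shapley(y, match):
    """Exact cohort Shapley for one target subject (exponential in d,
    so only practical for small d). match[i, j] = 1 means subject i is
    similar to the target on feature j; the target matches itself."""
    y = np.asarray(y, float)
    m = np.asarray(match, bool)
    n, d = m.shape

    def v(S):
        # Mean response over the cohort matching the target on all j in S.
        in_cohort = m[:, list(S)].all(axis=1) if S else np.ones(n, bool)
        return y[in_cohort].mean()

    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                weight = factorial(r) * factorial(d - r - 1) / factorial(d)
                phi[j] += weight * (v(S + (j,)) - v(S))
    return phi
```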


Performance, Opaqueness, Consequences, and Assumptions: Simple questions for responsible planning of machine learning solutions

Biecek, Przemyslaw

arXiv.org Artificial Intelligence

The data revolution has generated a huge demand for data-driven solutions. This demand propels a growing number of easy-to-use tools and training courses for aspiring data scientists that enable the rapid building of predictive models. Today, weapons of math destruction can be easily built and deployed without detailed planning and validation. This rapidly extends the list of AI failures, i.e., deployments that lead to financial losses or even violate democratic values such as equality, freedom and justice. The lack of planning, rules and standards around model development leads to the "anarchisation of AI". This problem is reported under different names, such as validation debt, reproducibility crisis, and lack of explainability. Post-mortem analysis of AI failures often reveals mistakes made in the early phases of model development or data acquisition. Thus, instead of curing the consequences of deploying harmful models, we should prevent them as early as possible by paying more attention to the initial planning stage. In this paper, we propose a quick and simple framework to support the planning of AI solutions. The POCA framework is based on four pillars: Performance, Opaqueness, Consequences, and Assumptions. It helps to set expectations and plan constraints for the AI solution before any model is built and any data are collected. With the help of the POCA method, preliminary requirements can be defined for the model-building process, so that costly model misspecification errors can be identified as soon as possible or even avoided. AI researchers, product owners and business analysts can use this framework in the initial stages of building AI solutions.
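
One way to operationalize the four pillars is as a pre-modeling checklist; the sketch below is an illustrative structure with example questions in the spirit of POCA, not the paper's prescribed instrument.

```python
# Illustrative POCA-style planning checklist; the concrete questions are
# examples of the kind of thing each pillar asks, not the paper's wording.
from dataclasses import dataclass, field

@dataclass
class POCAPlan:
    performance: dict = field(default_factory=lambda: {
        "target_metric": None,          # e.g., AUC or MAE, with a threshold
        "baseline_to_beat": None,       # current process or a simple model
    })
    opaqueness: dict = field(default_factory=lambda: {
        "explanations_required": None,  # who must understand decisions, and how
        "model_class_constraints": None,
    })
    consequences: dict = field(default_factory=lambda: {
        "cost_of_errors": None,         # financial, legal, societal
        "affected_groups": None,
    })
    assumptions: dict = field(default_factory=lambda: {
        "data_representativeness": None,
        "expected_drift": None,
    })

    def unresolved(self):
        """List every planning question still unanswered before modeling starts."""
        return [f"{pillar}.{q}" for pillar in vars(self)
                for q, a in getattr(self, pillar).items() if a is None]
```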


Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models

Lengerich, Benjamin, Tan, Sarah, Chang, Chun-Hao, Hooker, Giles, Caruana, Rich

arXiv.org Artificial Intelligence

Recent methods for training generalized additive models (GAMs) with pairwise interactions achieve state-of-the-art accuracy on a variety of datasets. Adding interactions to GAMs, however, introduces an identifiability problem: effects can be freely moved between main effects and interaction effects without changing the model predictions. In some cases, this can lead to contradictory interpretations of the same underlying function. This is a critical problem because a central motivation of GAMs is model interpretability. In this paper, we use the Functional ANOVA decomposition to uniquely define interaction effects and thus produce identifiable additive models with purified interactions. To compute this decomposition, we present a fast, exact, mass-moving algorithm that transforms any piecewise-constant function (such as a tree-based model) into a purified, canonical representation. We apply this algorithm to several datasets and show large disparity, including contradictions, between the apparent and the purified effects.
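
For a single pairwise effect on a grid, the mass-moving idea can be sketched as an alternating scheme: repeatedly move weighted row and column means out of the interaction and into the main effects until every weighted row and column mean of the interaction is zero. This is a simplified rendering with our own names, assuming every grid row and column carries positive weight.

```python
import numpy as np

def purify_pairwise(F, w, tol=1e-12, max_iter=1000):
    """Sketch of mass-moving purification for one pairwise effect.

    F : (a, b) matrix of the learned interaction f12(x1, x2) on a grid.
    w : (a, b) nonnegative data-density weights on the same grid
        (every row and column assumed to have positive total weight).
    Moves weighted row/column means out of the interaction into the main
    effects until the functional-ANOVA identifiability condition holds.
    """
    F = F.astype(float).copy()
    f1 = np.zeros(F.shape[0])   # main effect of x1 absorbed from F
    f2 = np.zeros(F.shape[1])   # main effect of x2 absorbed from F
    intercept = 0.0
    w1 = w.sum(axis=1)          # marginal weights over x1 bins
    w2 = w.sum(axis=0)          # marginal weights over x2 bins
    for _ in range(max_iter):
        row_means = (F * w).sum(axis=1) / w1   # weighted mean of each row
        F -= row_means[:, None]
        f1 += row_means
        col_means = (F * w).sum(axis=0) / w2   # weighted mean of each column
        F -= col_means[None, :]
        f2 += col_means
        if abs(row_means).max() < tol and abs(col_means).max() < tol:
            break
    # Center the absorbed main effects too, moving mass to the intercept.
    for f, wm in ((f1, w1), (f2, w2)):
        mu = (f * wm).sum() / wm.sum()
        f -= mu
        intercept += mu
    return F, f1, f2, intercept
```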


On Prediction and Tolerance Intervals for Dynamic Treatment Regimes

Lizotte, Daniel J., Tahmasebi, Arezoo

arXiv.org Machine Learning

We develop and evaluate tolerance interval methods for dynamic treatment regimes (DTRs) that can provide more detailed prognostic information to patients who will follow an estimated optimal regime. Although the problem of constructing confidence intervals for DTRs has been extensively studied, prediction and tolerance intervals have received little attention. We begin by reviewing in detail different interval estimation and prediction methods and then adapt them to the DTR setting. We illustrate some of the challenges associated with tolerance interval estimation stemming from the fact that we do not typically have data that were generated from the estimated optimal regime. We give an extensive empirical evaluation of the methods, discuss several practical aspects of method choice, and present an example application using data from a clinical trial. Finally, we discuss future directions within this important emerging area of DTR research.
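
As background, a standard distribution-free tolerance interval built from order statistics conveys the flavor of the intervals being adapted; the DTR-specific complications discussed in the paper (data not generated under the estimated optimal regime) are not addressed by this textbook sketch.

```python
import numpy as np
from scipy import stats

def nonparametric_tolerance_interval(x, coverage=0.90, confidence=0.95):
    """Two-sided distribution-free tolerance interval from order statistics.

    Finds the narrowest symmetric pair (x_(r), x_(s)) such that, with the
    given confidence, the interval covers at least `coverage` of the
    population. A generic textbook construction, offered as a sketch of
    the kind of interval adapted to DTRs in the paper.
    """
    x = np.sort(np.asarray(x, float))
    n = len(x)
    for r in range(n // 2, 0, -1):   # widen symmetrically from the middle
        s = n + 1 - r
        # Coverage of (x_(r), x_(s)) follows a Beta(s - r, n - s + r + 1) law.
        conf = stats.beta.sf(coverage, s - r, n - s + r + 1)
        if conf >= confidence:
            return x[r - 1], x[s - 1], conf
    raise ValueError("sample too small for the requested coverage/confidence")
```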


Deliberation and its Role in the Formation of Intentions

Rao, Anand S., Georgeff, Michael P.

arXiv.org Artificial Intelligence

Deliberation plays an important role in the design of rational agents embedded in the real world. In particular, deliberation leads to the formation of intentions, i.e., plans of action that the agent is committed to achieving. In this paper, we present a branching-time possible-worlds model for representing and reasoning about beliefs, goals, intentions, time, actions, probabilities, and payoffs. We compare this possible-worlds approach with the more traditional decision-tree representation and provide a transformation from decision trees to possible worlds. Finally, we illustrate how an agent can perform deliberation using a decision-tree representation and then use a possible-worlds model to form and reason about its intentions.
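
A toy rendering of the decision-tree side of this deliberation is given below: choice nodes commit to the branch with the highest expected payoff, chance nodes average over outcomes, and the selected root action plays the role of the adopted intention. The tree encoding is ours, not the paper's possible-worlds formalism.

```python
# Toy deliberation over a decision tree with choice, chance, and payoff nodes.

def deliberate(node):
    """Return (expected_payoff, plan) for a decision-tree node."""
    kind = node["kind"]
    if kind == "payoff":
        return node["value"], []
    if kind == "chance":              # nature: probability-weighted average
        ev = sum(p * deliberate(child)[0] for p, child in node["outcomes"])
        return ev, []
    if kind == "choice":              # agent: commit to the best action
        options = [(deliberate(child)[0], act, child)
                   for act, child in node["actions"].items()]
        best_ev, best_act, best_child = max(options)
        return best_ev, [best_act] + deliberate(best_child)[1]
    raise ValueError(kind)

tree = {"kind": "choice", "actions": {
    "stay_home": {"kind": "payoff", "value": 0},
    "go_out": {"kind": "chance", "outcomes": [
        (0.7, {"kind": "payoff", "value": 10}),
        (0.3, {"kind": "payoff", "value": -5}),
    ]},
}}
value, intention = deliberate(tree)   # -> 5.5, ["go_out"]
```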